
Journal of the Korea Institute of Information and Communication Engineering (한국정보통신학회 논문지)

Korean Title: 지역적 가중치 파라미터 제거를 적용한 CNN 모델 압축
English Title: Apply Locally Weight Parameter Elimination for CNN Model Compression
Author(s): Su-chang Lim, Do-yeon Kim
Citation: Vol. 22, No. 9, pp. 1165-1171 (Sep. 2018)
Korean Abstract (translated)
CNN requires a large amount of computation and memory in the process of extracting object features. In addition, because the network is configured by the user and fixed before training, its structure cannot be modified during training, and it is difficult to use on mobile devices with limited computing resources. To address these problems, we apply a pruning method to the pre-trained weight file to reduce the amount of computation and the memory requirement. The method consists of three steps. First, all the weights of the previously trained network file are loaded layer by layer. Second, the absolute values of each layer's weights are taken and their mean is computed; the mean is set as a threshold, and weights below the threshold are removed. Finally, the pruned network file is retrained. We experimented on LeNet-5 and AlexNet, achieving compression rates of 31x on LeNet-5 and 12x on AlexNet.
English Abstract
CNN requires a large amount of computation and memory in the process of extracting object features. Also, because the network structure configured by the user is fixed, it cannot be modified during training, and the model is difficult to use on mobile devices with low computing power. To solve these problems, we apply a pruning method to the pre-trained weight file to reduce computation and memory requirements. This method consists of three steps. First, all the weights of the pre-trained network file are retrieved layer by layer. Second, the absolute value of each layer's weights is taken and the average is obtained. After setting the average as a threshold, weights below the threshold are removed. Finally, the network file to which the pruning method was applied is re-trained. We experimented with LeNet-5 and AlexNet, achieving compression rates of 31x on LeNet-5 and 12x on AlexNet.
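The per-layer thresholding described in the abstract can be sketched as follows. This is a minimal illustration using NumPy, assuming the pre-trained weights have been loaded into a dict of arrays keyed by layer name; the paper's actual framework, weight-file format, and retraining procedure (step 3) are not specified here.

```python
import numpy as np

def prune_weights(layer_weights):
    """Magnitude pruning per the abstract: for each layer, use the mean
    absolute weight as the threshold and zero out every weight whose
    magnitude falls below it. `layer_weights` is a hypothetical
    {layer_name: ndarray} mapping (step 1: weights loaded layer by layer)."""
    pruned = {}
    for name, w in layer_weights.items():
        threshold = np.mean(np.abs(w))   # step 2: mean of |w| for this layer
        mask = np.abs(w) >= threshold    # keep only weights at/above threshold
        pruned[name] = w * mask          # removed weights become exact zeros
    return pruned

# Toy stand-in for a pre-trained weight file.
weights = {"conv1": np.array([[0.05, -0.8], [0.3, -0.02]])}
pruned = prune_weights(weights)
# mean |w| for conv1 is (0.05 + 0.8 + 0.3 + 0.02) / 4 = 0.2925,
# so only -0.8 and 0.3 survive; the other entries are zeroed.
```

Step 3 (retraining) would then be performed in the training framework while holding the zeroed positions fixed, e.g. by re-applying the mask after each gradient update.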
Keywords: CNN, Pruning, Parameter Compression, Model Compression